By now you have already seen the pervasive amount of AI "content" and "features" that companies put out daily; AI images, AI chatbots, AI tools, etc, all making their way into corporate products. There is a lot of misunderstanding of AI as a whole, and specifically of LLMs (large language models), thanks to misinformation and false advertising; people now believe that AI is intelligent because of the illusion of intelligence it portrays. However, the truth is that LLMs are not actually "intelligent". They don't understand or comprehend what you tell them or ask them, they don't understand photos you input, and they especially aren't "sentient". Within this page, I'll do some debunking.
What actually is an LLM?
A large language model, sometimes referred to under the blanket term "AI", is a ginormous algorithm that decides its outputs based on the inputs it's given. Notice how I said "algorithm", which means there isn't any actual comprehension of the input; rather, the language model works much like the YouTube algorithm or Google's search queries. The simplest way I can put it is that all chatbot LLMs, including ChatGPT, are essentially just the world's largest switch cases. The only reason they seem so convincing is that their datasets are ginormous, to the point that they can cover nearly any input given.
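To make the "outputs based on inputs" idea concrete, here's a toy sketch of next-word prediction. This is a hypothetical miniature I wrote for illustration, not how a real LLM works internally (real models use neural networks rather than lookup tables), but it shows the core pattern-matching idea: the program counts which word tends to follow which in its training text, then generates text purely from those counts, with no comprehension involved.

```python
import random
from collections import defaultdict

def train(text):
    """Count, for each word, which words follow it and how often."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=5, seed=None):
    """Extend `start` by repeatedly picking a likely next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # this word never appeared mid-sentence in training
        # weighted pick: frequent continuations are more likely
        options, weights = zip(*followers.items())
        out.append(rng.choices(options, weights=weights)[0])
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the fish"
model = train(corpus)
print(generate(model, "the", seed=0))
```

Everything the toy model "says" is recombined from its training text; it never produces a word it hasn't seen, which is the spirit of the switch-case comparison above.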
That isn't to say, however, that the model has a pre-defined response for every possible input. The model follows a set of rules and can interchange text as needed in order to give unique responses. Despite this, the model isn't forming genuine responses of its own; it's just recombining the data it already contains. If an LLM were truly sentient, it would be able to formulate thoughts, store them, and express them even when the dataset it was given never instructed it to.
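The "interchanging text" point can be sketched too. This hypothetical example (templates and phrases invented for illustration) shows how a small amount of stored data plus slot-filling rules can produce many distinct-looking responses, none of which involve an original thought:

```python
import random

# Fixed response skeletons with slots to fill in. Varied output from
# static data: the variety comes from recombination, not from thinking.
templates = [
    "I think {topic} is {opinion}.",
    "In my view, {topic} seems {opinion}.",
]
opinions = ["fascinating", "overrated", "complicated"]

def respond(topic, seed=None):
    """Pick a template and an opinion, then fill the slots."""
    rng = random.Random(seed)
    return rng.choice(templates).format(topic=topic,
                                        opinion=rng.choice(opinions))

print(respond("chess", seed=1))
print(respond("chess", seed=2))
```

Ask it about "chess" twice and you get two different sentences, but every word was pre-supplied; nothing was thought up.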
Are LLMs actually intelligent, and do they actually understand what I say?
Short answer: no. Slightly longer answer: it would be literally impossible. An LLM, or AI in general, is physically incapable of actual understanding because it lacks one important thing: consciousness. This is why humans and animals can learn, but a computer can't. A computer's "thinking" is 0s and 1s, and it can only follow instructions. The reality is that computer hardware, no matter how advanced it becomes, is not designed to develop consciousness the way a human brain is.
So you're telling me it's all impossible?
Surprisingly, there is a way to develop an AI with consciousness, and it is possibly the scariest and most dystopian way out there: lab-cultivated neurons connected to computer hardware. Now, you may be thinking that this is hypothetical and complete pseudoscience, but unfortunately it exists.